59 research outputs found

    On the completeness of ensembles of motion planners for decentralized planning

    We provide a set of sufficient conditions to establish the completeness of an ensemble of motion planners, that is, a set of loosely coupled motion planners that produce a unified result. The planners are assumed to divide the total planning problem across some parameter space(s), such as task space, state space, action space, or time. Robotic applications have employed ensembles of planners for decades, although the concept has not been formally unified or analyzed until now. We focus on applications in multi-robot navigation and collision avoidance. We show that individual resolution- or probabilistically-complete planners that meet certain communication criteria constitute a (respectively, resolution- or probabilistically-) complete ensemble of planners. This ensemble of planners, in turn, guarantees that the robots are free of deadlock, livelock, and starvation.
    Boeing Company
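    A minimal sketch of the ensemble idea in Python (not the paper's formalism; all names and representations below are illustrative assumptions): member planners divide the problem across time, and the paper's communication criteria are modeled crudely as shared hand-off states between consecutive segments.

        # Minimal sketch: an "ensemble" of motion planners dividing a problem
        # across time. Each member is assumed (resolution- or probabilistically-)
        # complete on its own segment; the ensemble chains results through
        # shared boundary states.
        from typing import Callable, List, Optional

        Plan = List[tuple]                                   # a sequence of states (hypothetical)
        Planner = Callable[[tuple, tuple], Optional[Plan]]   # (start, goal) -> plan or None

        def ensemble_plan(planners: List[Planner], waypoints: List[tuple]) -> Optional[Plan]:
            """Run one member planner per segment and concatenate the results."""
            full_plan: Plan = []
            for planner, start, goal in zip(planners, waypoints, waypoints[1:]):
                segment = planner(start, goal)
                if segment is None:       # a failed member means this decomposition fails
                    return None
                full_plan.extend(segment)
            return full_plan

        # Toy usage: two trivial straight-line planners sharing the hand-off state (1, 0).
        straight_line = lambda start, goal: [start, goal]
        print(ensemble_plan([straight_line, straight_line], [(0, 0), (1, 0), (2, 0)]))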

    IkeaBot: An autonomous multi-robot coordinated furniture assembly system

    We present an automated assembly system that directs the actions of a team of heterogeneous robots in the completion of an assembly task. From an initial user-supplied geometric specification, the system reasons about the geometry of individual parts in order to deduce how they fit together. The task is then automatically transformed into a symbolic description of the assembly, a sort of blueprint. A symbolic planner generates an assembly sequence that can be executed by a team of collaborating robots. Each robot fulfills one of two roles: parts delivery or parts assembly. The latter are equipped with specialized tools to aid in the assembly process. Additionally, the robots engage in coordinated co-manipulation of large, heavy assemblies. We provide details of an example furniture kit assembled by the system.
    Boeing Company
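    As a toy illustration of the symbolic planning step described above (the operation names and precedence constraints are hypothetical; the real system derives them from part geometry), any topological order of the precedence graph is a valid assembly sequence:

        # Toy symbolic assembly-sequence planner: precedence constraints in,
        # one valid assembly sequence out.
        from graphlib import TopologicalSorter

        # "X requires Y first" constraints, as might be deduced from geometry.
        precedence = {
            "attach_leg_1": set(),
            "attach_leg_2": set(),
            "flip_tabletop": {"attach_leg_1", "attach_leg_2"},
            "attach_shelf": {"flip_tabletop"},
        }

        sequence = list(TopologicalSorter(precedence).static_order())
        print(sequence)  # e.g. ['attach_leg_1', 'attach_leg_2', 'flip_tabletop', 'attach_shelf']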

    Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning

    We introduce a method for following high-level navigation instructions by mapping directly from images, instructions, and pose estimates to continuous low-level velocity commands for real-time control. The Grounded Semantic Mapping Network (GSMN) is a fully differentiable neural network architecture that builds an explicit semantic map in the world reference frame by incorporating a pinhole camera projection model within the network. The information stored in the map is learned from experience, while the local-to-world transformation is computed explicitly. We train the model using DAggerFM, a modified variant of DAgger that trades tabular convergence guarantees for improved training speed and memory use. We test GSMN in virtual environments on a realistic quadcopter simulator and show that incorporating explicit mapping and grounding modules allows GSMN to outperform strong neural baselines and almost reach the performance of an expert policy. Finally, we analyze the learned map representations and show that using an explicit map leads to an interpretable instruction-following model.
    Comment: To appear in Robotics: Science and Systems (RSS), 2018
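    A minimal sketch of the projection geometry, assuming known depth and camera pose (GSMN itself projects learned feature maps, and none of the names below are its API): a pixel is back-projected through the pinhole model and transformed into the world frame, which determines the map cell its features land in.

        import numpy as np

        def pixel_to_world(u, v, depth, K, T_world_cam):
            """Back-project pixel (u, v) at a given depth into world coordinates."""
            ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in the camera frame
            p_cam = depth * ray_cam                             # 3D point in the camera frame
            return T_world_cam[:3, :3] @ p_cam + T_world_cam[:3, 3]

        K = np.array([[320.0,   0.0, 320.0],   # hypothetical pinhole intrinsics
                      [  0.0, 320.0, 240.0],
                      [  0.0,   0.0,   1.0]])
        T = np.eye(4)                          # identity camera-to-world pose, for illustration
        print(pixel_to_world(320, 240, depth=2.0, K=K, T_world_cam=T))  # -> [0. 0. 2.]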

    Single assembly robot in search of human partner: Versatile grounded language generation

    We describe an approach for enabling robots to recover from failures by asking for help from a human partner. For example, if a robot fails to grasp a needed part during a furniture assembly task, it might ask a human partner to “Please hand me the white table leg near you.” After receiving the part from the human, the robot can recover from its grasp failure and continue the task autonomously. This paper describes an approach for enabling a robot to automatically generate a targeted natural language request for help from a human partner. The robot generates a natural language description of its need by minimizing the entropy of the command with respect to its model of the human partner's language understanding, a novel approach to grounded language generation. Our long-term goal is to compare targeted requests for help against more open-ended requests in which the robot simply asks “Help me,” demonstrating that targeted requests are more easily understood by human partners.
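    A small sketch of the entropy-minimization idea under an assumed listener model (the candidate requests and probabilities below are made up): among candidate requests, the robot picks the one whose induced distribution over groundings leaves the listener least uncertain.

        import math

        def entropy(dist):
            """Shannon entropy (bits) of a distribution given as {outcome: probability}."""
            return -sum(p * math.log2(p) for p in dist.values() if p > 0)

        # Hypothetical listener model: P(intended object | request).
        listener = {
            "Hand me the leg": {"white_leg": 0.5, "black_leg": 0.5},
            "Hand me the white table leg near you": {"white_leg": 0.95, "black_leg": 0.05},
        }

        best = min(listener, key=lambda request: entropy(listener[request]))
        print(best)  # the more specific request, since it leaves the listener less ambiguity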

    RF-Compass: Robot object manipulation using RFIDs

    Modern robots have to interact with their environment, search for objects, and move them around. Yet, for a robot to pick up an object, it needs to identify the object's orientation and locate it to within centimeter-scale accuracy. Existing systems that provide such information are either very expensive (e.g., the VICON motion capture system, valued at hundreds of thousands of dollars) and/or suffer from occlusion and a narrow field of view (e.g., computer vision approaches). This paper presents RF-Compass, an RFID-based system for robot navigation and object manipulation. RFIDs are low-cost and work in non-line-of-sight scenarios, allowing them to address the limitations of existing solutions. Given an RFID-tagged object, RF-Compass accurately navigates a robot equipped with RFIDs toward the object. Further, it locates the center of the object to within a few centimeters and identifies its orientation so that the robot may pick it up. RF-Compass's key innovation is an iterative algorithm formulated as a convex optimization problem. The algorithm uses the RFID signals to partition the space and keeps refining the partitions based on the robot's consecutive moves. We have implemented RF-Compass using USRP software radios and evaluated it with commercial RFIDs and a KUKA youBot robot. For the task of furniture assembly, RF-Compass can locate furniture parts with a median error of 1.28 cm and identify their orientation with a median error of 3.3 degrees.
    National Science Foundation (U.S.)
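    A one-dimensional toy analogue of the partition-refinement idea (RF-Compass actually solves a convex optimization over pairwise RFID measurements in the plane; the comparison oracle below is an assumption): each move yields a comparison of which probe point is nearer the tag, shrinking the feasible region that must contain the object.

        def locate(tag_pos, lo=0.0, hi=10.0, tol=0.01):
            """Narrow down a tag's 1-D position using only nearer/farther comparisons."""
            while hi - lo > tol:
                a = lo + (hi - lo) / 3            # first probe position
                b = hi - (hi - lo) / 3            # second probe position
                # Comparison oracle, standing in for relative RFID distance measurements.
                if abs(tag_pos - a) < abs(tag_pos - b):
                    hi = b                        # tag lies in the region nearer to a
                else:
                    lo = a                        # tag lies in the region nearer to b
            return (lo + hi) / 2

        print(round(locate(tag_pos=4.2), 2))      # -> 4.2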